

Can Uncertainty Quantification Enable Better Learning-based Index Tuning?

Yu, Tao, Zou, Zhaonian, Xiong, Hao

arXiv.org Artificial Intelligence

Index tuning is crucial for optimizing database performance by selecting optimal indexes based on the workload. The key to this process lies in an accurate and efficient benefit estimator. Traditional methods that rely on what-if tools often suffer from inefficiency and inaccuracy. Learning-based models, in contrast, are a promising alternative but face challenges such as instability, lack of interpretability, and complex management. To overcome these limitations, we adopt a novel approach: quantifying the uncertainty in a learning-based model's results, thereby combining the strengths of traditional and learning-based methods for reliable index tuning. We propose Beauty, the first uncertainty-aware framework, which enhances learning-based models with uncertainty quantification and uses what-if tools as a complementary mechanism to improve reliability and reduce management complexity. Specifically, we introduce a novel method that combines an AutoEncoder with Monte Carlo Dropout to jointly quantify uncertainty, tailored to the characteristics of benefit-estimation tasks. In experiments involving sixteen models, our approach outperformed existing uncertainty quantification methods in the majority of cases. We also conducted index-tuning tests on six datasets; applying the Beauty framework eliminated worst-case scenarios and more than tripled the occurrence of best-case scenarios.
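The Beauty implementation itself is not reproduced here, but the Monte Carlo Dropout half of the idea can be sketched in a few lines. The two-layer regressor and its random weights below are hypothetical stand-ins, not the paper's model: keeping dropout active at inference and averaging many stochastic forward passes yields both a benefit estimate (the mean) and an uncertainty signal (the standard deviation) that could decide when to fall back to a what-if tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer benefit-estimation regressor; the weights are
# random stand-ins for a trained model.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, dropout_p=0.5):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > dropout_p   # Bernoulli dropout mask
    h = h * mask / (1.0 - dropout_p)         # inverted-dropout scaling
    return (h @ W2).ravel()

def mc_dropout_predict(x, n_samples=200):
    """Monte Carlo Dropout: the sample mean is the benefit estimate,
    the sample std is the (epistemic) uncertainty."""
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(4, 8))   # 4 hypothetical workload feature vectors
mean, std = mc_dropout_predict(x)
# Estimates with high std would be handed to the what-if tool instead.
```

In the framework described above, this uncertainty score is what arbitrates between trusting the learned model and invoking the slower but more dependable what-if tool.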


Using the 'What-If Tool' to Investigate Machine Learning Models

#artificialintelligence

Let's now explore the capabilities of the WIT with an example, taken from the demos provided on the website. It is called Income Classification: we need to predict whether a person earns more than $50k a year based on their census information. The data comes from the UCI Census dataset, which consists of attributes such as age, marital status, and education level. Let's begin with some exploration of the dataset. Here is a link to the web demo for following along.
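The demo itself runs in the browser, but the underlying task is ordinary binary classification. As a rough, self-contained sketch (the data below is synthetic, and the two features, age and years of education, are illustrative stand-ins for the real census attributes), a plain logistic regression captures the shape of the problem:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic census-like data (a stand-in for the UCI Census dataset):
# two features, age and years of education; label 1 means income > $50k.
n = 500
age = rng.uniform(18, 70, n)
edu = rng.uniform(6, 20, n)
X = np.column_stack([age / 70.0, edu / 20.0, np.ones(n)])  # scaled + bias
y = ((0.03 * age + 0.3 * edu + rng.normal(0, 1.5, n)) > 6.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression trained by gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p - y) / n

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
```

A model like this is exactly the kind of artifact the What-If Tool is designed to probe: slicing its predictions by feature values rather than looking only at aggregate accuracy.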


Ensuring the Pentagon follows ethics for artificial intelligence

#artificialintelligence

In February, after more than a year consulting with a range of experts, the Department of Defense (DoD) released five principles for ethics around artificial intelligence (AI). If AI doesn't meet these standards, the Department has said, it won't be fielded. "The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order," Secretary Mark Esper said in the news release. The principles, which apply to combat and non-combat functions, are that AI must be the following: responsible, equitable, traceable, reliable, and governable. Such guidelines are relatively high level, though, leaving individual departments and agencies on their own to implement what each adjective means for a specific use case.


Best Machine Learning Research of 2019

#artificialintelligence

The field of machine learning has continued to accelerate through 2019, moving at light speed with compelling new results coming out of academia and the research arms of large tech firms like Google, Microsoft, Yahoo, Facebook and many more. It's a daunting task for the down-in-the-trenches data scientist to keep pace. I advise my data science students at UCLA to keep up with the latest research results in order to stay ahead of the pack. I recount how industry luminary Andrew Ng keeps his head above water by toting around a file of research papers (so when he has a free moment, like riding in an Uber, he can consume part of a paper). It does take time to add the research realm to your everyday duties, but I think it's fun to know which technologies are fertile areas of research.


Beginner's Guide To Explainable AI: Hands-On Introduction To What-If Tool

#artificialintelligence

Explainable AI, or XAI for short, is a domain concerned with making the decision-making of complex machine learning models and algorithms transparent. In this article, we will take a look at a tool built for the purpose of making AI explainable. A simple way to understand this concept is to compare the decision-making process of humans with that of machines. How do we humans come to a decision? Our decisions range from small, insignificant ones, like what outfit to wear to an event, to highly complex ones that involve risk, such as investments or loan approvals.


Google's new 'Explainable AI' (xAI) service

#artificialintelligence

Artificial intelligence is set to transform global productivity, working patterns, and lifestyles and create enormous wealth. Research firm Gartner expects the global AI economy to increase from about $1.2 trillion last year to about $3.9 trillion by 2022, while McKinsey sees it delivering global economic activity of around $13 trillion by 2030. AI techniques, especially Deep Learning (DL) models, are revolutionizing the business and technology world with jaw-dropping performances in one application area after another -- image classification, object detection, object tracking, pose recognition, video analytics, synthetic picture generation -- just to name a few. They are being used in healthcare, I.T. services, finance, manufacturing, autonomous driving, video game playing, scientific discovery, and even the criminal justice system. However, they are anything but classical Machine Learning (ML) algorithms and techniques.


8 Explainable AI Frameworks Driving A New Paradigm For Transparency In AI

#artificialintelligence

Due to the opacity of Deep Learning solutions, there has been a lot of talk about how to make explainability a built-in part of the ML pipeline. Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning and enables transparency. The first picture consists of a bunch of mathematical expressions chained together that represent the way the inner layers of an algorithm or neural network function. The second picture also shows the working of an algorithm, but the message is more lucid.


Using the 'What-If Tool' to investigate Machine Learning models

#artificialintelligence

In this era of explainable and interpretable Machine Learning, one cannot merely be content with training a model and obtaining predictions from it. To really make an impact and obtain good results, we should also be able to probe and investigate our models. Beyond that, algorithmic fairness constraints and bias should be kept clearly in mind before going ahead with a model. Investigating a model requires asking a lot of questions, and one needs the acumen of a detective to probe for issues and inconsistencies within it. Such a task is also usually complex, requiring a lot of custom code.


Google's What-If Tool And The Future Of Explainable AI

#artificialintelligence

Art exhibition "Waterfall of Meaning" by Google PAIR displayed at the Barbican Curve Gallery. The rise of deep learning has been defined by a shift away from transparent and understandable human-written code towards sealed black boxes whose creators have little understanding of how or even why they yield the results they do. Concerns over bias, brittleness and flawed representations have led to growing interest in the area of "explainable AI" in which frameworks help interrogate a model's internal workings to shed light on precisely what it has learned about the world and help its developers nudge it towards a fairer and more faithful internal representation. As companies like Google roll out a growing stable of explainable AI tools like its What-If Tool, perhaps a more transparent and understandable deep learning future can help address the limitations that have slowed the field's deployment. Since the dawn of the computing revolution, the underlying programming that guided those mechanical thinking machines was provided by humans through transparent and visible instruction sets.


The What-If Tool: Interactive Probing of Machine Learning Models

Wexler, James, Pushkarna, Mahima, Bolukbasi, Tolga, Wattenberg, Martin, Viegas, Fernanda, Wilson, Jimbo

arXiv.org Machine Learning

A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.
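The tool itself is an interactive notebook widget, but one of the fairness checks it supports, comparing a model's positive-decision rate across data slices, is easy to sketch. The scores and the group feature below are random stand-ins for a real model and dataset:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical model scores and a binary slicing feature for 1,000 rows;
# stand-ins for what the What-If Tool would read from a model + dataset.
preds = rng.random(1000)            # model scores in [0, 1]
group = rng.integers(0, 2, 1000)    # e.g. a sensitive-attribute slice

def positive_rate(scores, mask, threshold=0.5):
    """Fraction of a data slice receiving a positive decision."""
    return (scores[mask] > threshold).mean()

rate_a = positive_rate(preds, group == 0)
rate_b = positive_rate(preds, group == 1)
parity_gap = abs(rate_a - rate_b)   # demographic-parity gap across slices
```

The What-If Tool surfaces this kind of per-slice comparison interactively, alongside threshold adjustment and counterfactual editing, with no custom code required.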